
    Imagined Communities and Identities in English as a Foreign Language (EFL) Learning: A Literature Review

    Imagined community and identity have been recognized as critical aspects of English language learning. An imagined community is the ideal community that learners wish to engage in, while an imagined identity is the ideal self that language learners wish to become in the future. However, research on these two notions in relation to English as a foreign language (EFL) learning remains scant. To that end, this paper presents a review of the contemporary theories on imagined communities and identities in EFL learning. It first discusses imagined communities in terms of their functions, communities of practice, the notion of imagined communities and the concept of imagined EFL classroom communities. It then scrutinizes imagined identities in terms of poststructuralist theory, English language learners' identities, the notion of imagined identity and EFL learners' imagined identities. The paper is intended to provide a timely and needed conceptual framework for other relevant constructs (e.g., investment in English language learning).

    Heterogeneous ensemble selection for evolving data streams.

    Ensemble learning has been widely applied to both batch and streaming data classification. In the latter setting, most existing ensemble systems are homogeneous, meaning they are generated from a single type of learning model. In contrast, by combining several different types of learning models, a heterogeneous ensemble system can achieve greater diversity among its members, which helps to improve its performance. Although heterogeneous ensemble systems have achieved many successes in the batch classification setting, it is not trivial to extend them directly to the data stream setting. In this study, we propose a novel HEterogeneous Ensemble Selection (HEES) method, which dynamically selects an appropriate subset of base classifiers for prediction in the stream setting. We are inspired by the observation that a well-chosen subset of good base classifiers may outperform the whole ensemble. Here, we define a good candidate as one that exhibits not only high predictive performance but also high confidence in its predictions. Our selection process is thus divided into two sub-processes: accurate-candidate selection and confident-candidate selection. We define an accurate candidate in the stream context as a base classifier with high accuracy over the current concept, and a confident candidate as one with a confidence score above a certain threshold. In the first sub-process, we employ prequential accuracy to estimate the performance of a base classifier at a specific time; in the second, we propose a new measure to quantify predictive confidence and a method to learn the threshold incrementally. The final ensemble is formed by taking the intersection of the sets of confident and accurate classifiers. Experiments on a wide range of data streams show that the proposed method achieves competitive performance with lower running time than state-of-the-art online ensemble methods.
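
    To make the selection idea concrete, here is a minimal, hypothetical sketch (not the authors' code): prequential accuracy is tracked with a fading factor, confidence is a running average of the top class probability, and the final ensemble is the intersection of the accurate and confident subsets. The paper's incremental threshold-learning procedure is simplified here to fixed thresholds.

```python
# Hypothetical sketch of HEES-style dynamic selection (not the authors' code).
import numpy as np

class MemberStats:
    """Tracks prequential accuracy and mean confidence for one base classifier."""
    def __init__(self, fading=0.99):
        self.fading = fading        # exponential fading factor that down-weights old examples
        self.correct = 0.0
        self.seen = 0.0
        self.confidence = 0.0

    def update(self, proba, y_true):
        y_pred = int(np.argmax(proba))
        self.correct = self.fading * self.correct + (y_pred == y_true)
        self.seen = self.fading * self.seen + 1.0
        # running average of the classifier's confidence (top class probability)
        self.confidence = self.fading * self.confidence + (1 - self.fading) * float(np.max(proba))

    @property
    def prequential_accuracy(self):
        return self.correct / max(self.seen, 1e-9)

def select_members(stats, acc_threshold=0.7, conf_threshold=0.6):
    """Final ensemble = intersection of the accurate and confident candidate sets."""
    accurate = {i for i, s in enumerate(stats) if s.prequential_accuracy >= acc_threshold}
    confident = {i for i, s in enumerate(stats) if s.confidence >= conf_threshold}
    return accurate & confident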

    Asymptotic periodic solutions of differential equations with infinite delay

    In this paper, using the spectral theory of functions and properties of evolution semigroups, we establish conditions for the existence and uniqueness of asymptotically 1-periodic solutions to a class of abstract differential equations with infinite delay of the form
    \begin{equation*}
    \frac{d u(t)}{d t}=A u(t)+L(u_t)+f(t),
    \end{equation*}
    where $A$ is the generator of a strongly continuous semigroup of linear operators, $L$ is a bounded linear operator from a phase space $\mathscr{B}$ to a Banach space $X$, $u_t$ is the element of $\mathscr{B}$ defined by $u_t(\theta)=u(t+\theta)$ for $\theta \leq 0$, and $f$ is asymptotically 1-periodic in the sense that $\lim_{t \to \infty}\big(f(t+1)-f(t)\big)=0$. A Lotka-Volterra model with diffusion and infinite delay is considered to illustrate the results.
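
    For concreteness, one standard form of a Lotka-Volterra equation with diffusion and infinite delay that fits this abstract framework is the following (an assumed illustrative form, not necessarily the exact system studied in the paper):
    \begin{equation*}
    \frac{\partial u(t,x)}{\partial t} = d\,\Delta u(t,x) + u(t,x)\Big[a - b\,u(t,x) - c\int_{-\infty}^{0} k(\theta)\,u(t+\theta,x)\,d\theta\Big],
    \end{equation*}
    where $d, a, b, c > 0$ and $k$ is an integrable delay kernel; the diffusion term generates the strongly continuous semigroup (the role of $A$), and the delay integral plays the role of $L(u_t)$.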

    DEFEG: deep ensemble with weighted feature generation.

    With the significant breakthroughs of Deep Neural Networks in recent years, multi-layer architectures have influenced other sub-fields of machine learning, including ensemble learning. In 2017, Zhou and Feng introduced a deep random forest called gcForest that involves several layers of Random Forest-based classifiers. Although gcForest has outperformed several benchmark algorithms on specific datasets in terms of classification accuracy and model complexity, its input features do not guarantee better performance as the model grows deeper, layer by layer. We address this limitation by introducing a deep ensemble model with a novel feature generation module. Unlike gcForest, where the original features are concatenated to the outputs of the classifiers to generate the input features for the subsequent layer, we apply weights to the classifiers' outputs as augmented features to grow the deep model. Using weights in the feature generation process adjusts the input data of each layer, leading to better results for the deep model. We encode the weights using variable-length encoding and develop a variable-length Particle Swarm Optimisation method to search for the optimal weight values by maximising classification accuracy on the validation data. Experiments on a number of UCI datasets confirm the benefit of the proposed method compared to several well-known benchmark algorithms.
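
    A rough sketch of the weighted feature generation step, under simplifying assumptions (hypothetical helper names; the variable-length PSO search is abstracted into a fixed weight vector):

```python
# Hypothetical sketch of DEFEG-style weighted feature generation (not the authors' code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict

def next_layer_features(X, classifiers, weights, y=None):
    """Concatenate the original features with weighted class-probability outputs."""
    blocks = [X]
    for clf, w in zip(classifiers, weights):
        if y is not None:
            # out-of-fold predictions avoid leaking training labels into the next layer
            proba = cross_val_predict(clf, X, y, cv=3, method="predict_proba")
            clf.fit(X, y)          # refit on all data for use at inference time
        else:
            proba = clf.predict_proba(X)
        blocks.append(w * proba)   # the weight scales this classifier's augmented features
    return np.hstack(blocks)

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
layer = [RandomForestClassifier(n_estimators=50, random_state=0), GaussianNB()]
weights = [0.7, 0.3]               # in DEFEG these would come from variable-length PSO
X2 = next_layer_features(X, layer, weights, y)
print(X2.shape)                    # (300, 14): 10 original features + two weighted proba blocks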

    Ensemble of deep learning models with surrogate-based optimization for medical image segmentation.

    Deep Neural Networks (DNNs) have created a breakthrough in medical image analysis in recent years. Because clinical applications of automated medical analysis must be reliable, robust and accurate, it is necessary to devise effective DNN-based models for medical applications. In this paper, we propose an ensemble framework of DNNs for the problem of medical image segmentation, noting that combining multiple models can obtain better results than any constituent one. We introduce an effective combining strategy for the individual segmentation models based on swarm intelligence, a family of optimization algorithms inspired by biological processes. The expensive computation time of the optimizer during objective function evaluation is relieved by a surrogate-based method: we train a surrogate on the objective function information of some populations and then use it to predict the objective values of the candidates in subsequent populations. Experiments run on a number of public datasets indicate that our framework achieves competitive results within reasonable computation time.
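
    A minimal sketch of the surrogate idea, under stated assumptions (a toy objective stands in for the expensive validation Dice score of a weighted ensemble of segmentation models; the swarm optimizer is reduced to random candidate screening):

```python
# Hypothetical sketch of surrogate-assisted search for ensemble combining weights
# (not the authors' code; a toy landscape replaces the real segmentation objective).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def true_objective(w):
    """Stand-in for the expensive objective, e.g. validation Dice of the weighted ensemble."""
    w = w / w.sum()
    return -np.sum((w - np.array([0.5, 0.3, 0.2])) ** 2)   # toy peak at weights (.5, .3, .2)

# 1) Evaluate the expensive objective on a small initial set of candidate weight vectors.
W = rng.random((20, 3))
scores = np.array([true_objective(w) for w in W])

# 2) Fit a cheap surrogate on the (weights -> score) pairs collected so far.
surrogate = GaussianProcessRegressor().fit(W, scores)

# 3) Screen a large candidate population with the surrogate; only the most promising
#    candidates would then be passed to the expensive true objective.
candidates = rng.random((5000, 3))
predicted = surrogate.predict(candidates)
best = candidates[np.argmax(predicted)]
print(best / best.sum(), true_objective(best))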

    A homogeneous-heterogeneous ensemble of classifiers.

    In this study, we introduce an ensemble system that combines a homogeneous ensemble and a heterogeneous ensemble in a single framework. Based on the observation that randomly projected data differs significantly from the original data, and each projection from the others, we construct the homogeneous module by applying random projections to the training data to obtain new training sets. In the heterogeneous module, several learning algorithms are trained on these new training sets to generate the base classifiers. We propose four combining algorithms based on the Sum Rule and the Majority Vote Rule for the proposed ensemble. Experiments on several popular datasets confirm that the proposed ensemble method outperforms a number of well-known benchmark algorithms. The framework also offers great flexibility in real-world applications: any technique that enriches the training data can be used in the homogeneous module, and any set of learning algorithms in the heterogeneous module.
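
    A minimal sketch of the two modules together (hypothetical code, not the authors'): random projections supply data diversity, different learner types supply model diversity, and the Sum Rule averages class probabilities.

```python
# Hypothetical sketch of a homogeneous-heterogeneous ensemble (not the authors' code).
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.random_projection import GaussianRandomProjection
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
learner_types = [LogisticRegression(max_iter=1000), GaussianNB(),
                 DecisionTreeClassifier(random_state=0)]

models = []
for seed in range(3):                          # homogeneous module: one random projection per seed
    proj = GaussianRandomProjection(n_components=10, random_state=seed).fit(X)
    Xp = proj.transform(X)
    for base in learner_types:                 # heterogeneous module: several learner types
        models.append((proj, clone(base).fit(Xp, y)))

def predict_sum_rule(X_new):
    """Sum Rule: average the class probabilities of all base classifiers, then argmax."""
    probas = [clf.predict_proba(proj.transform(X_new)) for proj, clf in models]
    return np.mean(probas, axis=0).argmax(axis=1)

print("training accuracy:", (predict_sum_rule(X) == y).mean())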

    Deep Learning-Aided Multicarrier Systems

    This paper proposes a deep learning (DL)-aided multicarrier (MC) system operating over fading channels, in which both the modulation and demodulation blocks are modeled by deep neural networks (DNNs), regarded as the encoder and decoder of an autoencoder (AE) architecture, respectively. Unlike existing AE-based systems, which incorporate the domain knowledge of a channel equalizer to suppress the effects of wireless channels, the proposed scheme, termed MC-AE, directly feeds the decoder with the channel state information and the received signal, which are then processed in a fully data-driven manner. This new approach enables MC-AE to learn the encoder and decoder jointly, optimizing the diversity and coding gains over fading channels. In particular, the block error rate of MC-AE is analyzed to show its performance gains over existing hand-crafted baselines, such as various recent index modulation-based MC schemes. We then extend MC-AE to multiuser scenarios; the resulting system is termed MU-MC-AE. Accordingly, two novel DNN structures for uplink and downlink MU-MC-AE transmissions are proposed, along with a novel cost function that ensures fast training convergence and fairness among users. Finally, simulation results show the superiority of the proposed DL-based schemes over current baselines in terms of both error performance and receiver complexity.
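
    A hypothetical PyTorch sketch of the central idea (not the authors' model; dimensions, layer sizes and the lack of power normalization are all simplifying assumptions): the decoder is fed the channel state information alongside the received signal, rather than relying on an explicit equalizer.

```python
# Hypothetical sketch of the MC-AE idea (not the authors' model).
import torch
import torch.nn as nn

N = 4          # subcarriers (each complex symbol carried as 2 reals)
K = 4          # bits per transmitted block

encoder = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, 2 * N))
decoder = nn.Sequential(nn.Linear(4 * N, 64), nn.ReLU(), nn.Linear(64, K))

def forward(bits, h, noise_std=0.1):
    """bits: (B, K) in {0,1}; h: (B, 2N) real/imag parts of per-carrier channel gains."""
    x = encoder(bits)                                   # transmitted signal, shape (B, 2N)
    xr, xi = x.chunk(2, dim=1)
    hr, hi = h.chunk(2, dim=1)
    yr = hr * xr - hi * xi                              # complex fading channel: y = h * x + n
    yi = hr * xi + hi * xr
    y = torch.cat([yr, yi], dim=1) + noise_std * torch.randn_like(x)
    return decoder(torch.cat([y, h], dim=1))            # decoder sees both y and the CSI

bits = torch.randint(0, 2, (8, K)).float()
h = torch.randn(8, 2 * N)
logits = forward(bits, h)
loss = nn.functional.binary_cross_entropy_with_logits(logits, bits)
loss.backward()                                          # trains encoder and decoder jointly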

    Multi-label classification via incremental clustering on an evolving data stream.

    With the advancement of storage and processing technology, an enormous amount of data is collected daily in many applications. Nowadays, advanced data analytics are used to mine the collected data for useful information and to make predictions, contributing to the competitive advantage of companies. The increasing data volume, however, has posed many problems for classical batch learning systems, such as the need to retrain the model completely with newly arrived samples or the impracticality of storing and accessing a large volume of data. This has prompted interest in incremental learning that operates on data streams. In this study, we develop an incremental online multi-label classification (OMLC) method based on a weighted clustering model. The model adapts to changes in the data via a decay mechanism in which each sample's weight dwindles over time, so the clustering model always focuses more on newly arrived samples. In the classification process, only clusters whose weights are greater than a threshold (called mature clusters) are used to assign labels to samples. In our method, not only is the clustering model incrementally maintained with the revealed ground-truth labels of the arrived samples, but the number of labels predicted for a sample is also adjusted based on the Hoeffding inequality and the label cardinality. The experimental results show that our method is competitive with several well-known benchmark algorithms on six performance measures in both the stationary and the concept drift settings.
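
    A minimal sketch of the decay-weighted clustering component under stated assumptions (hypothetical class names; the Hoeffding-based adjustment of the label count is omitted and a fixed label count is used instead):

```python
# Hypothetical sketch of decay-weighted clusters for stream multi-label classification
# (not the authors' code).
import numpy as np

class DecayCluster:
    def __init__(self, center, labels, decay=0.99):
        self.center = np.asarray(center, dtype=float)
        self.label_counts = np.asarray(labels, dtype=float)  # per-label frequency in the cluster
        self.weight = 1.0
        self.decay = decay

    def fade(self):
        self.weight *= self.decay            # every cluster's weight dwindles over time

    def absorb(self, x, labels):
        """Incrementally update the cluster with a new sample and its ground-truth labels."""
        lr = 1.0 / (self.weight + 1.0)
        self.center += lr * (np.asarray(x, dtype=float) - self.center)
        self.label_counts += np.asarray(labels, dtype=float)
        self.weight += 1.0

def predict(clusters, x, maturity=0.5, n_labels=2):
    """Only mature clusters (weight above a threshold) take part in label assignment."""
    mature = [c for c in clusters if c.weight >= maturity]
    nearest = min(mature, key=lambda c: np.linalg.norm(c.center - x))
    # assign the n_labels most frequent labels of the nearest mature cluster
    return np.argsort(-nearest.label_counts)[:n_labels]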

    Multi-layer heterogeneous ensemble with classifier and feature selection.

    Deep Neural Networks have achieved many successes when applied to visual, text and speech information in various domains. The crucial reasons behind these successes are the multi-layer architecture and the in-model feature transformation of deep learning models. These design principles have also inspired other sub-fields of machine learning, including ensemble learning. In recent years, several deep homogeneous ensemble models have been introduced, with a large number of classifiers in each layer; these models therefore incur a high computational cost for classification. Moreover, the existing deep ensemble models use all classifiers, including unnecessary ones, which can reduce the predictive accuracy of the ensemble. In this study, we propose a multi-layer ensemble learning framework called MUlti-Layer heterogeneous Ensemble System (MULES) for the classification problem. The proposed system works with a small number of heterogeneous classifiers to obtain ensemble diversity, and is therefore efficient in its resource usage. We also propose an Evolutionary Algorithm-based selection method to choose a suitable subset of classifiers and features at each layer, enhancing the predictive performance of MULES. The selection method uses the NSGA-II algorithm to optimize two objectives concerning classification accuracy and ensemble diversity. Experiments on 33 datasets confirm that MULES outperforms a number of well-known benchmark algorithms.
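
    A hypothetical sketch of one such layer (not the authors' code): the paper selects classifiers and features with NSGA-II over accuracy and diversity; here a single-objective greedy filter stands in for that step, and each layer appends out-of-fold class probabilities of the selected classifiers to the features.

```python
# Hypothetical sketch of a MULES-style layer (not the authors' code; greedy
# accuracy-only selection replaces the NSGA-II multi-objective search).
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

def build_layer(X, y, candidates, keep=2):
    """Keep the `keep` best candidates; append their out-of-fold probabilities as features."""
    scored = sorted(candidates, key=lambda c: cross_val_score(c, X, y, cv=3).mean(), reverse=True)
    selected = [clone(c) for c in scored[:keep]]
    blocks = [X]
    for clf in selected:
        blocks.append(cross_val_predict(clf, X, y, cv=3, method="predict_proba"))
        clf.fit(X, y)                          # refit on all data for inference later
    return np.hstack(blocks), selected

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
pool = [LogisticRegression(max_iter=1000), GaussianNB(), DecisionTreeClassifier(random_state=0)]
X1, layer1 = build_layer(X, y, pool)           # layer 1 augments the original features
X2, layer2 = build_layer(X1, y, pool)          # layer 2 trains on the augmented features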